-
Abstract The rapid advancement of large-scale cosmological simulations has opened new avenues for cosmological and astrophysical research. However, the increasing diversity among cosmological simulation models presents a challenge to robustness. In this work, we develop the Model-Insensitive ESTimator (Miest), a machine that can robustly estimate the cosmological parameters, Ωm and σ8, from neutral hydrogen maps of simulation models in the Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS) project: IllustrisTNG, SIMBA, Astrid, and SWIFT-Eagle. An estimator is considered robust if it possesses a consistent predictive power across all simulations, including those used during the training phase. We train our machine using multiple simulation models and ensure that it extracts only the features common between the models while disregarding model-specific features. This allows us to develop a novel model that is capable of accurately estimating parameters across a range of simulation models, without being biased toward any particular model. Upon investigating the latent space (a set of summary statistics), we find that the implementation of robustness leads to the blending of latent variables across different models, demonstrating the removal of model-specific features. In comparison to a standard machine lacking robustness, the average performance of Miest on simulations unseen during the training phase is improved by ∼17% for Ωm and 38% for σ8. By using a machine learning approach that can extract robust, yet physical features, we hope to improve our understanding of galaxy formation and evolution in a (subgrid) model-insensitive manner, and ultimately, gain insight into the underlying physical processes responsible for robustness.
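The robustness idea described above (train on several simulation suites at once and keep only the features they share) can be illustrated with a short, hypothetical training step. The sketch below assumes a PyTorch setup; the Encoder, the alignment penalty, and lambda_align are illustrative stand-ins, not the actual Miest architecture or loss.

```python
# Minimal sketch of robustness via cross-model feature alignment (assumed setup,
# not the Miest implementation).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

encoder = Encoder()
regressor = nn.Linear(32, 2)   # predicts (Omega_m, sigma_8)
opt = torch.optim.Adam(list(encoder.parameters()) + list(regressor.parameters()), lr=1e-3)
mse = nn.MSELoss()
lambda_align = 1.0             # weight of the cross-model alignment penalty (illustrative)

def training_step(maps_by_model, params_by_model):
    """maps_by_model: list of (batch, 1, H, W) tensors, one per simulation model."""
    latents = [encoder(m) for m in maps_by_model]
    pred_loss = sum(mse(regressor(z), p) for z, p in zip(latents, params_by_model))
    # Encourage model-agnostic latents: pull the per-model latent means together.
    means = torch.stack([z.mean(dim=0) for z in latents])
    align_loss = ((means - means.mean(dim=0)) ** 2).mean()
    loss = pred_loss + lambda_align * align_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```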
-
Abstract We present a proof-of-concept simulation-based inference on Ωm and σ8 from the Sloan Digital Sky Survey (SDSS) Baryon Oscillation Spectroscopic Survey (BOSS) LOWZ Northern Galactic Cap (NGC) catalog using neural networks and domain generalization techniques, without the need for summary statistics. Using rapid light-cone simulations from L-PICOLA, mock galaxy catalogs are produced that fully incorporate the observational effects. The collection of galaxies is fed as input to a point cloud-based network, Minkowski-PointNet. We also add relatively more accurate Gadget mocks to obtain robust and generalizable neural networks. By explicitly learning the representations that reduce the discrepancies between the two different data sets via the semantic alignment loss term, we show that the latent space configuration aligns into a single plane in which the two cosmological parameters form clear axes. Consequently, during inference, the SDSS BOSS LOWZ NGC catalog maps onto the plane, demonstrating effective generalization and improving prediction accuracy compared to non-generalized models. Results from the ensemble of 25 independently trained machines find Ωm = 0.339 ± 0.056 and σ8 = 0.801 ± 0.061, inferred only from the distribution of galaxies in the light-cone slices without relying on any indirect summary statistics. A single machine that best adapts to the Gadget mocks yields a tighter prediction of Ωm = 0.282 ± 0.014 and σ8 = 0.786 ± 0.036. We emphasize that adaptation across multiple domains can enhance the robustness of the neural networks in observational data.
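A semantic alignment term of the kind described above can be sketched generically: latent vectors of samples from the two mock catalogs (e.g., L-PICOLA and Gadget) whose cosmological labels are similar are pulled together. The tensor shapes, kernel width, and function name below are assumptions for illustration, not the paper's exact loss.

```python
# Illustrative semantic alignment loss between two domains (assumed form).
import torch

def semantic_alignment_loss(z_src, y_src, z_tgt, y_tgt, sigma=0.1):
    """Pull together latents of source/target samples whose (Omega_m, sigma_8)
    labels are close, weighted by a Gaussian kernel on the label distance."""
    label_dist = torch.cdist(y_src, y_tgt)            # (N_src, N_tgt) label distances
    weights = torch.exp(-(label_dist ** 2) / (2 * sigma ** 2))
    latent_dist = torch.cdist(z_src, z_tgt) ** 2      # squared latent distances
    return (weights * latent_dist).sum() / (weights.sum() + 1e-8)

# Usage with random stand-in tensors (64 source and 32 target samples, 16-d latents):
z_src, z_tgt = torch.randn(64, 16), torch.randn(32, 16)
y_src, y_tgt = torch.rand(64, 2), torch.rand(32, 2)
loss = semantic_alignment_loss(z_src, y_src, z_tgt, y_tgt)
```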
-
Abstract We present CAMELS-ASTRID, the third suite of hydrodynamical simulations in the Cosmology and Astrophysics with MachinE Learning (CAMELS) project, along with new simulation sets that extend the model parameter space based on the previous frameworks of CAMELS-TNG and CAMELS-SIMBA, to provide broader training sets and testing grounds for machine-learning algorithms designed for cosmological studies. CAMELS-ASTRID employs the galaxy formation model following the ASTRID simulation and contains 2124 hydrodynamic simulation runs that vary three cosmological parameters (Ωm, σ8, Ωb) and four parameters controlling stellar and active galactic nucleus (AGN) feedback. Compared to the existing TNG and SIMBA simulation suites in CAMELS, the fiducial model of ASTRID features the mildest AGN feedback and predicts the least baryonic effect on the matter power spectrum. The training set of ASTRID covers a broader variation in the galaxy populations and the baryonic impact on the matter power spectrum compared to its TNG and SIMBA counterparts, which can make machine-learning models trained on the ASTRID suite exhibit better extrapolation performance when tested on other hydrodynamic simulation sets. We also introduce extension simulation sets in CAMELS that widely explore 28 parameters in the TNG and SIMBA models, demonstrating the enormity of the overall galaxy formation model parameter space and the complex nonlinear interplay between cosmology and astrophysical processes. With the new simulation suites, we show that building robust machine-learning models favors training and testing on the largest possible diversity of galaxy formation models. We also demonstrate that it is possible to train accurate neural networks to infer cosmological parameters using the high-dimensional TNG-SB28 simulation set.
-
Abstract In a novel approach employing implicit likelihood inference (ILI), also known as likelihood-free inference, we calibrate the parameters of cosmological hydrodynamic simulations against observations, which has previously been unfeasible due to the high computational cost of these simulations. For computational efficiency, we train neural networks as emulators on ∼1000 cosmological simulations from the CAMELS project to estimate simulated observables, taking as input the cosmological and astrophysical parameters, and use these emulators as surrogates for the cosmological simulations. Using the cosmic star formation rate density (SFRD) and, separately, the stellar mass functions (SMFs) at different redshifts, we perform ILI on selected cosmological and astrophysical parameters (Ωm, σ8, stellar wind feedback, and kinetic black hole feedback) and obtain full six-dimensional posterior distributions. In the performance test, the ILI from the emulated SFRD (SMFs) can recover the target observables with a relative error of 0.17% (0.4%). We find that degeneracies exist between the parameters inferred from the emulated SFRD, confirmed with new full cosmological simulations. We also find that the SMFs can break the degeneracy in the SFRD, which indicates that the SMFs provide complementary constraints for the parameters. Further, we find that a parameter combination inferred from an observationally inferred SFRD reproduces the target observed SFRD very well, whereas, in the case of the SMFs, the inferred and observed SMFs show significant discrepancies that indicate potential limitations of the current galaxy formation modeling and calibration framework, and/or systematic differences and inconsistencies between observations of the SMFs.
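The emulator-as-surrogate idea can be sketched briefly. The network sizes, the uniform prior, and the simple weighted-prior posterior below are assumptions for illustration only; they stand in for the paper's actual implicit likelihood inference machinery.

```python
# Sketch: an MLP emulator maps (cosmological + astrophysical) parameters to an
# observable vector (e.g., binned SFRD), then prior draws are reweighted against
# a target observable. All names, shapes, and ranges are assumed.
import torch
import torch.nn as nn

n_params, n_obs = 6, 10
emulator = nn.Sequential(
    nn.Linear(n_params, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n_obs),
)

def train_emulator(theta, observables, epochs=200, lr=1e-3):
    """theta: (N, n_params) simulation parameters; observables: (N, n_obs)."""
    opt = torch.optim.Adam(emulator.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(emulator(theta), observables)
        loss.backward()
        opt.step()
    return loss.item()

def approximate_posterior(obs_target, n_draws=100_000, sigma=0.05):
    """Weight prior draws by a Gaussian pseudo-likelihood around the target."""
    theta = torch.rand(n_draws, n_params)            # uniform prior on [0, 1]^6 (assumed)
    with torch.no_grad():
        pred = emulator(theta)
    log_w = -((pred - obs_target) ** 2).sum(dim=1) / (2 * sigma ** 2)
    weights = torch.softmax(log_w, dim=0)
    return theta, weights                            # weighted posterior samples
```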
-
Abstract We present the Cosmology and Astrophysics with Machine Learning Simulations (CAMELS) Multifield Data set (CMD), a collection of hundreds of thousands of 2D maps and 3D grids containing many different properties of cosmic gas, dark matter, and stars from more than 2000 distinct simulated universes at several cosmic times. The 2D maps and 3D grids represent cosmic regions that span ∼100 million light-years and have been generated from thousands of state-of-the-art hydrodynamic and gravity-only N-body simulations from the CAMELS project. Designed to train machine-learning models, CMD is the largest data set of its kind, containing more than 70 TB of data. In this paper we describe CMD in detail and outline a few of its applications. We focus our attention on one such task, parameter inference, formulating the problems we face as a challenge to the community. We release all data and provide further technical details at https://camels-multifield-dataset.readthedocs.io.
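A minimal, hypothetical setup for the parameter-inference task on CMD-style 2D maps might look as follows; the file names, map resolution, normalization choices, and the assumption of one parameter row per map are placeholders to be checked against the CMD documentation.

```python
# Sketch of loading CMD-style 2D maps and parameters for supervised inference.
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

maps = np.load("Maps_example_field.npy")          # placeholder name; shape (N, H, W) assumed
params = np.loadtxt("params_example.txt")         # placeholder; assumed one parameter row per map

maps = np.log10(maps + 1e-12)                     # many fields span several decades
maps = (maps - maps.mean()) / maps.std()          # simple global normalization (illustrative)

x = torch.from_numpy(maps).float().unsqueeze(1)   # (N, 1, H, W) for a CNN
y = torch.from_numpy(params[:, :2]).float()       # keep (Omega_m, sigma_8) as targets

loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)
```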
-
Abstract The Cosmology and Astrophysics with Machine Learning Simulations (CAMELS) project was developed to combine cosmology with astrophysics through thousands of cosmological hydrodynamic simulations and machine learning. CAMELS contains 4233 cosmological simulations: 2049 N-body simulations and 2184 state-of-the-art hydrodynamic simulations that sample a vast volume in parameter space. In this paper, we present the CAMELS public data release, describing the characteristics of the CAMELS simulations and a variety of data products generated from them, including halo, subhalo, galaxy, and void catalogs, power spectra, bispectra, Lyα spectra, probability distribution functions, halo radial profiles, and X-ray photon lists. We also release over 1000 catalogs that contain billions of galaxies from CAMELS-SAM: a large collection of N-body simulations that have been combined with the Santa Cruz semianalytic model. We release all the data, comprising more than 350 terabytes and containing 143,922 snapshots, millions of halos, galaxies, and summary statistics. We provide further technical details on how to access, download, read, and process the data at https://camels.readthedocs.io.
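Reading one of the released snapshots can be sketched with h5py; the file path and the HDF5 group/field names below follow the usual Gadget-style layout, but treat them as assumptions to be verified against the CAMELS documentation.

```python
# Sketch of reading a CAMELS snapshot and building a coarse overdensity field.
import h5py
import numpy as np

snapshot = "snap_033.hdf5"                        # placeholder path to a downloaded snapshot

with h5py.File(snapshot, "r") as f:
    boxsize = f["Header"].attrs["BoxSize"]        # comoving box size (Gadget-style header)
    gas_pos = f["PartType0/Coordinates"][:]       # gas particle positions
    dm_pos = f["PartType1/Coordinates"][:]        # dark matter particle positions

# Quick dark matter density estimate on a 64^3 grid.
grid, _ = np.histogramdd(dm_pos, bins=64, range=[(0, boxsize)] * 3)
delta = grid / grid.mean() - 1.0                  # overdensity field
```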
